
    The problem of programming language concurrency semantics

    Despite decades of research, we do not have a satisfactory concurrency semantics for any general-purpose programming language that aims to support concurrent systems code. The Java Memory Model has been shown to be unsound with respect to standard compiler optimisations, while the C/C++11 model is too weak, admitting undesirable thin-air executions. Our goal in this paper is to articulate this major open problem as clearly as is currently possible, showing how it arises from the combination of multiprocessor relaxed-memory behaviour and the desire to accommodate current compiler optimisations. We make several novel contributions that each shed some light on the problem, constraining the possible solutions and identifying new difficulties. First, we give a positive result, proving in HOL4 that the existing axiomatic model for C/C++11 guarantees sequentially consistent semantics for simple race-free programs that do not use low-level atomics (DRF-SC, one of the core design goals). We then describe the thin-air problem and show that it cannot be solved, without restricting current compiler optimisations, using any per-candidate-execution condition in the style of the C/C++11 model. Thin-air executions were thought to be confined to programs using relaxed atomics, but we further show that they recur when one attempts to integrate the concurrency model with more of C, mixing atomic and nonatomic accesses, which also breaks the DRF-SC result. We then describe a semantics based on an explicit operational construction of out-of-order execution, giving the desired behaviour for thin-air examples but exposing further difficulties with accommodating existing compiler optimisations. Finally, we show that there are major difficulties integrating concurrency semantics with the C/C++ notion of undefined behaviour. We hope thereby to stimulate and enable research on this key issue.
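
    To make the thin-air problem concrete, here is a minimal C++11 sketch of the classic load-buffering shape the abstract refers to; the variable names and the value 42 are illustrative choices, not taken from the paper:

        #include <atomic>
        #include <thread>
        #include <cstdio>

        std::atomic<int> x{0}, y{0};

        void thread1() {
            int r1 = x.load(std::memory_order_relaxed);  // read x ...
            y.store(r1, std::memory_order_relaxed);      // ... and copy it into y
        }

        void thread2() {
            int r2 = y.load(std::memory_order_relaxed);  // read y ...
            x.store(r2, std::memory_order_relaxed);      // ... and copy it into x
        }

        int main() {
            std::thread t1(thread1), t2(thread2);
            t1.join(); t2.join();
            // The C/C++11 per-candidate-execution axioms do not forbid the outcome
            // x == y == 42: each load can be justified by the other thread's store
            // in a cycle, even though 42 appears nowhere in the program. No hardware
            // or reasonable compiler produces this, but the model admits it.
            std::printf("x = %d, y = %d\n", x.load(), y.load());
            return 0;
        }

    In practice this program only ever prints "x = 0, y = 0"; the point made in the abstract is that no per-candidate-execution condition can exclude such thin-air outcomes without also restricting compiler optimisations that are currently applied.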

    Code layout optimizations for transaction processing workloads


    On partial order semantics for SAT/SMT-based symbolic encodings of weak memory concurrency

    Concurrent systems are notoriously difficult to analyze, and technological advances such as weak memory architectures greatly compound this problem. This has renewed interest in partial order semantics as a theoretical foundation for formal verification techniques. Among these, symbolic techniques have been shown to be particularly effective at finding concurrency-related bugs because they can leverage highly optimized decision procedures such as SAT/SMT solvers. This paper gives new fundamental results on partial order semantics for SAT/SMT-based symbolic encodings of weak memory concurrency. In particular, we give the theoretical basis for a decision procedure that can handle a fragment of concurrent programs endowed with least fixed point operators. In addition, we show that a certain partial order semantics of relaxed sequential consistency is equivalent to the conjunction of three extensively studied weak memory axioms by Alglave et al. An important consequence of this equivalence is an asymptotically smaller symbolic encoding for bounded model checking which has only a quadratic number of partial order constraints compared to the state-of-the-art cubic-size encoding.
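
    As a rough illustration of what a partial-order constraint looks like in such a symbolic encoding, the sketch below uses the Z3 C++ API to model event ordering with integer "clock" variables, so that an edge e1 -> e2 becomes clk(e1) < clk(e2); the order is consistent exactly when the conjunction of the constraints is satisfiable. The event names and edges are made up for illustration and this is not the paper's actual encoding:

        #include <z3++.h>
        #include <iostream>

        int main() {
            z3::context c;
            z3::solver s(c);

            // Integer "clocks" for three hypothetical events of a candidate execution.
            z3::expr w_x = c.int_const("clk_write_x");
            z3::expr r_x = c.int_const("clk_read_x");
            z3::expr w_y = c.int_const("clk_write_y");

            s.add(w_x < r_x);  // reads-from: the read of x observes the write of x
            s.add(r_x < w_y);  // program order within one thread
            s.add(w_y < w_x);  // a deliberately cyclic ordering edge

            // The cycle w_x < r_x < w_y < w_x has no consistent linearisation,
            // so the solver reports unsat; dropping any one edge makes it sat.
            std::cout << s.check() << std::endl;
            return 0;
        }

    Built against Z3 (g++ example.cpp -lz3), this prints "unsat". In a bounded model checker the interesting direction is the opposite one: the solver searches for a satisfying assignment, i.e. a consistent execution reaching a bug, and the number of such ordering constraints is what the paper reduces from cubic to quadratic.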

    zFENCE


    Shared memory consistency models: a tutorial


    Efficient sequential consistency via conflict ordering


    Transactions as the Foundation of a Memory Consistency Model

    We argue for transactions as the synchronization primitive of an ordering-based memory consistency model. Rather than define transactions in terms of locks, our model defines locks, conditions, and atomic/volatile variables in terms of transactions. A traditional critical section, in particular, is a region of code, bracketed by transactions, in which certain data have been privatized. Our memory model, originally published at OPODIS’08, is based on the database notion of strict serializability (SS). In an explicit analogy to the DRF0 of Adve and Hill, we demonstrate that SS provides the appearance of transactional sequential consistency (TSC) for programs that are transactional data-race-free (TDRF). We argue against relaxation of the total order on transactions, but show that selective relaxation of the relationship between program order and transaction order (selective strict serializability, SSS) can allow the implementation of transaction-based locks to be as efficient as conventional locks. We also show that condition synchronization (in the form of the transactional retry primitive) can be accommodated in our model without explicit mention of speculation, opacity, or aborted transactions. Finally, we compare SS and SSS to the notion of strong isolation (SI), arguing that SI is neither sufficient for TSC nor necessary in programs that are TDRF.
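
    To illustrate the "locks defined in terms of transactions" direction, here is a minimal sketch using GCC's experimental transactional-memory extension (compile with -fgnu-tm); the spin-loop and the syntax are illustrative choices, not the paper's formalism:

        // A lock whose acquire and release are each a tiny transaction.
        struct tx_lock {
            bool held = false;

            void acquire() {
                for (;;) {
                    bool got = false;
                    __transaction_atomic {        // atomic test-and-set of the flag
                        if (!held) { held = true; got = true; }
                    }
                    if (got) return;              // data guarded by the lock is now
                                                  // privatized to this thread until release()
                }
            }

            void release() {
                __transaction_atomic { held = false; }  // the second bracketing transaction
            }
        };

    Read through the model described above, the code between acquire() and release() is an ordinary, non-transactional region bracketed by two transactions, which is exactly how the abstract characterises a traditional critical section.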